According to this derivation, a bilinear optical system can be decomposed into a set of convolution kernels, which can then be convolved with the input pattern; the resulting convolution fields are superimposed to yield an intensity field. On this basis, a fast sparse aerial-intensity lithography simulation method based on convolution kernels is proposed: the bilinear optical system is decomposed into a set of spatial-domain convolution kernels, and the aerial intensity is computed by convolving the layout with them in the spatial domain.
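The decomposition above can be sketched as a sum of squared convolution fields. This is a minimal illustration, assuming a binary mask array and a hypothetical list of precomputed kernels (the source does not specify how the kernels are obtained):

```python
import numpy as np

def conv2_same(mask, kern):
    # zero-padded linear convolution via FFT, cropped back to the mask shape
    s0 = mask.shape[0] + kern.shape[0] - 1
    s1 = mask.shape[1] + kern.shape[1] - 1
    full = np.fft.ifft2(np.fft.fft2(mask, (s0, s1)) * np.fft.fft2(kern, (s0, s1)))
    r0 = (kern.shape[0] - 1) // 2
    r1 = (kern.shape[1] - 1) // 2
    return full[r0:r0 + mask.shape[0], r1:r1 + mask.shape[1]]

def aerial_intensity(mask, kernels):
    # superimpose |mask * h_k|^2 over all decomposition kernels
    return sum(np.abs(conv2_same(mask, h)) ** 2 for h in kernels)
```

For a sparse layout, each convolution only needs to be evaluated near mask features, which is what makes the sparse variant fast.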
However, its weight adjustment is determined only by the learning rate and the difference between the input pattern and the weights of the winning neuron and its neighborhood. The SOM thus ignores certain (implicit) correlations that actually exist between the components of the input pattern and the weight vectors of all the competing neurons.
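The classic update rule being criticized can be sketched as follows; note that only the learning rate `lr` and the difference `x - weights` (scaled by a neighborhood function) enter the adjustment, with no term coupling the input components to the full set of competing weight vectors:

```python
import numpy as np

def som_update(weights, grid, x, lr, sigma):
    """One step of the standard SOM rule.

    weights: (n_units, dim) weight vectors
    grid:    (n_units, 2) map coordinates of each unit
    """
    # winner = best matching unit (minimum Euclidean distance to x)
    win = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Gaussian neighborhood around the winner on the map grid
    gdist = np.linalg.norm(grid - grid[win], axis=1)
    h = np.exp(-(gdist ** 2) / (2 * sigma ** 2))
    # move each weight toward x, scaled only by lr and the neighborhood
    weights += lr * h[:, None] * (x - weights)
    return win
```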
The first practical problem is how to encode the input pattern. An associative memory network requires its input signals to be binary, whereas feature vectors usually have real-valued components and therefore cannot be fed to the network directly. The input must be encoded into binary form beforehand, so the first question is which encoding method to use.
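One common scheme for turning real-valued components into binary codes is thermometer coding; this is only an illustrative choice, since the source leaves the encoding method open. Each component is quantized into a fixed number of units, of which the first k fire:

```python
import numpy as np

def thermometer_encode(x, lo, hi, bits):
    # normalize each real component into [0, 1], then light up a
    # proportional prefix of `bits` binary units per component
    levels = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    k = np.round(levels * bits).astype(int)
    code = np.zeros((len(x), bits), dtype=int)
    for i, ki in enumerate(k):
        code[i, :ki] = 1
    return code.ravel()
```

Thermometer codes preserve ordering (nearby real values share most bits), which matters because the Hamming distance between codes drives recall in the associative memory.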
However, such vectorization brings at least three potential problems: 1) structural or local contextual information may be destroyed; 2) since the dimension of the weight vector equals that of the input pattern, a high-dimensional input requires correspondingly large memory for the classifier's weights; 3) when the dimension of a vector pattern is very high while the sample size is small, a linear classifier is prone to overfitting.